Results 1 - 20 of 16,547
1.
Sci Rep ; 14(1): 7764, 2024 04 02.
Article in English | MEDLINE | ID: mdl-38565622

ABSTRACT

Sound is sensed by the ear but can also be felt on the skin by means of vibrotactile stimulation. Little research has addressed the perceptual implications of vibrotactile stimulation in the realm of music. Here, we studied which perceptual dimensions of music listening are affected by vibrotactile stimulation and whether the spatial segregation of vibrations improves vibrotactile stimulation. Forty-one listeners were presented with vibrotactile stimuli via a chair's surfaces (left and right arm rests, back rest, seat) in addition to music presented over headphones. Vibrations for each surface were derived from individual tracks of the music (multi condition) or conjointly by a mono rendering, in addition to incongruent and headphones-only conditions. Listeners evaluated unknown music from popular genres according to valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi and mono vibration conditions robustly enhanced the musical experience compared to listening via headphones alone. Vibrotactile enhancement was strong in the latent dimension of 'musical engagement', encompassing the sense of being a part of the music, arousal, and groove. These findings highlight the potential of vibrotactile cues for creating intense musical experiences.


Subjects
Music, Sound, Vibration, Emotions, Cues (Psychology), Auditory Perception/physiology
2.
Sci Rep ; 14(1): 7627, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38561365

ABSTRACT

This study investigated the effects of reproducing an ultrasonic sound component above 20 kHz on the subjective impressions of water sounds, using psychological and physiological information obtained by the semantic differential method and electroencephalography (EEG), respectively. The results indicated that the ultrasonic component affected the subjective impression of the water sounds. In addition, regarding the relationship between the psychological and physiological aspects, a moderate correlation was confirmed between the EEG change rate and subjective impressions. However, no differences in this relationship were found between the conditions with and without the ultrasonic component, suggesting that ultrasound does not directly affect the relationship between subjective impressions and EEG energy at the current stage. Furthermore, the correlations calculated for the left and right channels in the occipital region differed significantly, which suggests functional asymmetry in sound perception between the right and left hemispheres.


Subjects
Hearing, Sound, Electroencephalography/methods, Auditory Perception/physiology, Acoustic Stimulation
3.
Sci Rep ; 14(1): 8814, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627479

ABSTRACT

Rhythm perception and synchronisation is a musical ability with a neural basis, defined as the ability to perceive rhythm in music and synchronise body movements with it. The study aimed to examine synchronisation errors and physiological responses as reactions to metrorhythmic stimuli under synchronous and pseudosynchronous stimulation (synchronisation with a rhythm presented as externally controlled but in reality controlled or produced by the subject's own tapping). Nineteen subjects without diagnosed motor disorders participated in the study. Two tests were performed, in which the electromyography signal and reaction time were recorded using the NORAXON system. In addition, physiological signals such as electrodermal activity and blood volume pulse were measured using the Empatica E4. Study 1 consisted of a finger-tapping test adapted for pseudosynchrony with a given metrorhythmic stimulus, with preferred, decreasing, and increasing tempo choices. Study 2 consisted of metrorhythmic synchronisation during a heel-stomping test. Numerous correlations and statistically significant parameters were found in the subjects' responses with respect to their musical education and their musical and sports activities. Most of the differentiating characteristics showed evidence of a group division according to the undertaking of musical activities. Detailed analyses of synchronisation errors can contribute to the development of methods to improve the rehabilitation of subjects with motor dysfunction, and thereby to the development of an expert system that considers personalised musical preferences.


Subjects
Music, Sports, Humans, Movement/physiology, Reaction Time, Auditory Perception/physiology, Acoustic Stimulation
4.
Sci Rep ; 14(1): 8739, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627572

ABSTRACT

Inspired by recent findings in the visual domain, we investigated whether the stimulus-evoked pupil dilation reflects temporal statistical regularities in sequences of auditory stimuli. We conducted two preregistered pupillometry experiments (experiment 1, n = 30, 21 females; experiment 2, n = 31, 22 females). In both experiments, human participants listened to sequences of spoken vowels in two conditions. In the first condition, the stimuli were presented in a random order and, in the second condition, the same stimuli were presented in a sequence structured in pairs. The second experiment replicated the first experiment with a modified timing and number of stimuli presented and without participants being informed about any sequence structure. The sound-evoked pupil dilation during a subsequent familiarity task indicated that participants learned the auditory vowel pairs of the structured condition. However, pupil diameter during the structured sequence did not differ according to the statistical regularity of the pair structure. This contrasts with similar visual studies, emphasizing the susceptibility of pupil effects during statistically structured sequences to experimental design settings in the auditory domain. In sum, our findings suggest that pupil diameter may serve as an indicator of sound pair familiarity but does not invariably respond to task-irrelevant transition probabilities of auditory sequences.


Subjects
Pupil, Sound, Female, Humans, Pupil/physiology, Recognition (Psychology), Auditory Perception/physiology
5.
Nat Commun ; 15(1): 3093, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600118

ABSTRACT

Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These result from top-down corollary discharges, relaying predictions about vocal timing and acoustics. Recent evidence suggests that such signals may reflect two distinct processes, one suppressing neural activity during vocalization and another enhancing sensitivity to sensory feedback, rather than a single mechanism. Single-neuron recordings have been unable to disambiguate these processes because motor signals overlap with sensory inputs. Here, we sought to disentangle these processes in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression appeared broadly regardless of frequency tuning (gating), while tonic suppression was selective for vocal frequencies and feedback (prediction). This suggests that auditory cortex is modulated by concurrent corollary discharges with different computational mechanisms during vocalization.


Subjects
Auditory Cortex, Animals, Auditory Cortex/physiology, Neurons/physiology, Sensory Feedback/physiology, Feedback, Callithrix/physiology, Animal Vocalization/physiology, Auditory Perception/physiology, Acoustic Stimulation
6.
J Exp Child Psychol ; 242: 105897, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38461557

ABSTRACT

Previous studies have widely demonstrated that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in conflict control tasks. However, there is limited evidence regarding the performance of children with ADHD in cross-modal conflict processing tasks. The current study aimed to investigate whether children with ADHD have poor conflict control, which has an impact on sensory dominance effects at different levels of information processing under the influence of visual similarity. A total of 82 children aged 7 to 14 years, including 41 children with ADHD and 41 age- and sex-matched typically developing (TD) children, were recruited. We used the 2:1 mapping paradigm to separate levels of conflict, and the congruency of the audiovisual stimuli was divided into three conditions. In C trials, the target stimulus and the distractor stimulus were identical, and the bimodal stimuli corresponded to the same response keys. In PRIC trials, the distractor stimulus differed from the target stimulus and did not correspond to any response keys. In RIC trials, the distractor stimulus differed from the target stimulus, and the bimodal stimuli corresponded to different response keys. Therefore, we explicitly differentiated cross-modal conflict into a preresponse level (PRIC > C), corresponding to the encoding process, and a response level (RIC > PRIC), corresponding to the response selection process. Our results suggested that auditory distractors caused more interference during visual processing than visual distractors caused during auditory processing (i.e., typical auditory dominance) at the preresponse level regardless of group. However, visual dominance effects were observed in the ADHD group, whereas no visual dominance effects were observed in the TD group at the response level. 
A possible explanation is that visual similarity increased interference effects, making it more difficult for children with ADHD to control conflict when simultaneously confronted with incongruent visual and auditory inputs. The current study highlights how children with ADHD process cross-modal conflicts at multiple levels of information processing, thereby shedding light on the mechanisms underlying ADHD.


Subjects
Attention Deficit Disorder with Hyperactivity, Child, Humans, Visual Perception/physiology, Auditory Perception/physiology
7.
Dev Neurobiol ; 84(2): 47-58, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38466218

ABSTRACT

In sexually dimorphic zebra finches (Taeniopygia guttata), only males learn to sing their father's song, whereas females learn to recognize the songs of their father or mate but cannot sing themselves. Memory of learned songs is behaviorally expressed in females by preferring familiar songs over unfamiliar ones. Auditory association regions such as the caudomedial mesopallium (CMM; or caudal mesopallium) have been shown to be key nodes in a network that supports preferences for learned songs in adult females. However, much less is known about how song preferences develop during the sensitive period of learning in juvenile female zebra finches. In this study, we used blood-oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) to trace the development of a memory-based preference for the father's song in female zebra finches. Using BOLD fMRI, we found that only in adult female zebra finches with a preference for learned song over novel conspecific song, neural selectivity for the father's song was localized in the thalamus (dorsolateral nucleus of the medial thalamus; part of the anterior forebrain pathway, AFP) and in CMM. These brain regions also showed a selective response in juvenile female zebra finches, although activation was less prominent. These data reveal that neural responses in CMM, and perhaps also in the AFP, are shaped during development to support behavioral preferences for learned songs.


Subjects
Finches, Animal Vocalization, Male, Animals, Female, Animal Vocalization/physiology, alpha-Fetoproteins/metabolism, Finches/metabolism, Acoustic Stimulation/methods, Auditory Perception/physiology, Prosencephalon/metabolism, Magnetic Resonance Imaging/methods
8.
Commun Biol ; 7(1): 317, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38480875

ABSTRACT

Primate communication relies on multimodal cues, such as vision and audition, to facilitate the exchange of intentions, enable social interactions, avoid predators, and foster group cohesion during daily activities. Understanding the integration of facial and vocal signals is pivotal to comprehend social interaction. In this study, we acquire whole-brain ultra-high field (9.4 T) fMRI data from awake marmosets (Callithrix jacchus) to explore brain responses to unimodal and combined facial and vocal stimuli. Our findings reveal that the multisensory condition not only intensifies activations in the occipito-temporal face patches and auditory voice patches but also engages a more extensive network that includes additional parietal, prefrontal and cingulate areas, compared to the summed responses of the unimodal conditions. By uncovering the neural network underlying multisensory audiovisual integration in marmosets, this study highlights the efficiency and adaptability of the marmoset brain in processing facial and vocal social signals, providing significant insights into primate social communication.


Subjects
Callithrix, Magnetic Resonance Imaging, Animals, Callithrix/physiology, Ocular Vision, Brain Mapping, Auditory Perception/physiology
9.
eNeuro ; 11(3)2024 Mar.
Article in English | MEDLINE | ID: mdl-38467426

ABSTRACT

Auditory perception can be significantly disrupted by noise. To discriminate sounds from noise, auditory scene analysis (ASA) extracts the functionally relevant sounds from acoustic input. The zebra finch communicates in noisy environments. Neurons in their secondary auditory pallial cortex (caudomedial nidopallium, NCM) can encode song from background chorus, or scenes, and this capacity may aid behavioral ASA. Furthermore, song processing is modulated by the rapid synthesis of neuroestrogens when hearing conspecific song. To examine whether neuroestrogens support neural and behavioral ASA in both sexes, we retrodialyzed fadrozole (aromatase inhibitor, FAD) and recorded in vivo awake extracellular NCM responses to songs and scenes. We found that FAD affected neural encoding of songs by decreasing responsiveness and timing reliability in inhibitory (narrow-spiking), but not in excitatory (broad-spiking) neurons. Congruently, FAD decreased neural encoding of songs in scenes for both cell types, particularly in females. Behaviorally, we trained birds using operant conditioning and tested their ability to detect songs in scenes after administering FAD orally or injected bilaterally into NCM. Oral FAD increased response bias and decreased correct rejections in females, but not in males. FAD in NCM did not affect performance. Thus, FAD in the NCM impaired neuronal ASA, but this did not lead to behavioral disruption, suggesting resilience or compensatory responses. Moreover, impaired performance after systemic FAD suggests the involvement of other aromatase-rich networks outside the auditory pathway in ASA. This work highlights how transient disruption of estrogen synthesis can modulate higher-order processing in an animal model of vocal communication.


Subjects
Auditory Cortex, Finches, Female, Animals, Male, Finches/physiology, Aromatase, Reproducibility of Results, Animal Vocalization/physiology, Acoustic Stimulation, Auditory Pathways/physiology, Auditory Perception/physiology, Auditory Cortex/physiology
10.
Hum Brain Mapp ; 45(4): e26653, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38488460

ABSTRACT

Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with a facial articulation of a /ga/ (i.e., viseme), is typically fused into an illusory 'da' percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk and the veridical audiovisual congruent speech percepts result from probabilistic inference based on noisy sensory signals. According to these models, the inter-sensory conflict in McGurk stimuli may only increase observers' perceptual uncertainty. This functional magnetic resonance imaging (fMRI) study presented participants (20 male and 24 female) with audiovisual congruent, McGurk (i.e., auditory /ba/ + visual /ga/), and incongruent (i.e., auditory /ga/ + visual /ba/) stimuli along with their unisensory counterparts in a syllable categorization task. Behaviorally, observers' response entropy was greater for McGurk compared to congruent audiovisual stimuli. At the neural level, McGurk stimuli increased activations in a widespread neural system, extending from the inferior frontal sulci (IFS) to the pre-supplementary motor area (pre-SMA) and insulae, typically involved in cognitive control processes. Crucially, in line with Bayesian theories these activation increases were fully accounted for by observers' perceptual uncertainty as measured by their response entropy. Our findings suggest that McGurk and congruent speech processing rely on shared neural mechanisms, thereby supporting the McGurk illusion as a valid measure of natural audiovisual speech perception.
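Response entropy of the kind used here, the Shannon entropy of each observer's distribution of syllable reports for a given stimulus, is straightforward to compute. The sketch below is illustrative, not the study's analysis code; the example responses are hypothetical:

```python
from collections import Counter
from math import log2

def response_entropy(responses):
    """Shannon entropy (in bits) of a list of categorical responses.
    0 bits = perfectly consistent reports; higher = more uncertainty."""
    counts = Counter(responses)
    n = len(responses)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A listener who always reports 'da' is maximally certain (0 bits);
# reports split evenly across two syllables give 1 bit of entropy.
congruent = response_entropy(["da"] * 10)            # 0.0
mcgurk = response_entropy(["da"] * 5 + ["ba"] * 5)   # 1.0
```

Higher entropy for McGurk trials than for congruent trials is exactly the behavioral pattern the abstract reports.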


Subjects
Illusions, Speech Perception, Humans, Male, Female, Auditory Perception/physiology, Speech/physiology, Illusions/physiology, Visual Perception/physiology, Bayes Theorem, Uncertainty, Speech Perception/physiology, Acoustic Stimulation/methods, Photic Stimulation/methods
11.
Sci Rep ; 14(1): 7177, 2024 03 26.
Article in English | MEDLINE | ID: mdl-38531940

ABSTRACT

Visual modulation of the auditory system is not only a neural substrate for multisensory processing but also serves as a backup input underlying cross-modal plasticity in deaf individuals. Event-related potential (ERP) studies in humans have provided evidence of multiple-stage audiovisual interactions, ranging from tens to hundreds of milliseconds after the presentation of stimuli. However, it is still unknown whether the temporal course of visual modulation in the auditory ERPs can be characterized in animal models. EEG signals were recorded in sedated cats from subdermal needle electrodes. The auditory stimuli (clicks) and visual stimuli (flashes) were timed by two independent Poisson processes and were presented either simultaneously or alone. The visual-only ERPs were subtracted from the audiovisual ERPs before being compared to the auditory-only ERPs. N1 amplitude showed a trend of transitioning from suppression to facilitation, with a disruption at a flash-to-click delay of ~100 ms. We conclude that visual modulation as a function of stimulus onset asynchrony (SOA) over an extended range is more complex than previously characterized with short SOAs, and that its periodic pattern can be interpreted with the 'phase resetting' hypothesis.
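Timing two stimulus streams by independent Poisson processes amounts to drawing exponentially distributed inter-event intervals for each stream separately, which yields the continuous spread of flash-to-click delays the analysis relies on. A minimal sketch, where the rates and duration are illustrative assumptions rather than the study's parameters:

```python
import random

def poisson_event_times(rate_hz, duration_s, seed=None):
    """Event times of a homogeneous Poisson process: inter-event
    intervals are i.i.d. exponential with mean 1/rate_hz."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return times
        times.append(t)

# Two independent streams, as in the click/flash paradigm
clicks = poisson_event_times(rate_hz=0.5, duration_s=60, seed=1)
flashes = poisson_event_times(rate_hz=0.5, duration_s=60, seed=2)
```

Because the two processes are independent, the delay between a flash and the nearest click varies continuously across the recording, which is what allows the ERP to be examined as a function of SOA.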


Subjects
Auditory Evoked Potentials, Visual Perception, Animals, Humans, Visual Perception/physiology, Acoustic Stimulation, Auditory Evoked Potentials/physiology, Evoked Potentials/physiology, Auditory Perception/physiology, Photic Stimulation, Electroencephalography, Visual Evoked Potentials
12.
Anim Cogn ; 27(1): 8, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429588

ABSTRACT

Predation risk may affect the foraging behavior of birds. However, there has been little research on the ability of domestic birds to perceive predation risk and thus adjust their feeding behavior. In this study, we tested whether domestic budgerigars (Melopsittacus undulatus) perceived predation risk after the presentation of specimens and sounds of sparrowhawks (Accipiter nisus), domestic cats (Felis catus), and humans, and whether this in turn influenced their feeding behavior. When exposed to visual or acoustic stimuli, budgerigars showed significantly longer latency to feed under sparrowhawk, domestic cat, and human treatments than with controls. Budgerigars responded more strongly to acoustic stimuli than visual stimuli, and they showed the longest latency to feed and the least number of feeding times in response to sparrowhawk calls. Moreover, budgerigars showed shorter latency to feed and greater numbers of feeding times in response to human voices than to sparrowhawk or domestic cat calls. Our results suggest that domestic budgerigars may identify predation risk through visual or acoustic signals and adjust their feeding behavior accordingly.


Subjects
Auditory Perception, Melopsittacus, Humans, Animals, Cats, Auditory Perception/physiology, Melopsittacus/physiology, Predatory Behavior, Acoustics, Sound
13.
Int J Psychophysiol ; 199: 112328, 2024 May.
Article in English | MEDLINE | ID: mdl-38458383

ABSTRACT

According to the arousal-mood hypothesis, changes in arousal and mood when exposed to auditory stimulation underlie the detrimental effects or improvements in cognitive performance. Findings supporting or against this hypothesis are, however, often based on subjective ratings of arousal rather than autonomic/physiological indices of arousal. To assess the arousal-mood hypothesis, we carried out a systematic review of the literature on 31 studies investigating cardiac, electrodermal, and pupillometry measures when exposed to different types of auditory stimulation (music, ambient noise, white noise, and binaural beats) in relation to cognitive performance. Our review suggests that the effects of music, noise, or binaural beats on cardiac, electrodermal, and pupillometry measures in relation to cognitive performance are either mixed or insufficient to draw conclusions. Importantly, the evidence for or against the arousal-mood hypothesis is at best indirect because autonomic arousal and cognitive performance are often considered separately. Future research is needed to directly evaluate the effects of auditory stimulation on autonomic arousal and cognitive performance holistically.


Subjects
Music, Humans, Acoustic Stimulation, Music/psychology, Arousal/physiology, Attention, Cognition, Auditory Perception/physiology
14.
Cortex ; 174: 1-18, 2024 May.
Article in English | MEDLINE | ID: mdl-38484435

ABSTRACT

Hearing-in-noise (HIN) ability is crucial in speech and music communication. Recent evidence suggests that absolute pitch (AP), the ability to identify isolated musical notes, is associated with HIN benefits. A theoretical account postulates a link between AP ability and neural network indices of segregation. However, how AP ability modulates the brain activation and functional connectivity underlying HIN perception remains unclear. Here we used functional magnetic resonance imaging to contrast brain responses among a sample (n = 45) comprising 15 AP musicians, 15 non-AP musicians, and 15 non-musicians in perceiving Mandarin speech and melody targets under varying signal-to-noise ratios (SNRs: No-Noise, 0, -9 dB). Results reveal that AP musicians exhibited increased activation in auditory and superior frontal regions across both HIN domains (music and speech), irrespective of noise levels. Notably, substantially higher sensorimotor activation was found in AP musicians when the target was music compared to speech. Furthermore, we examined AP effects on neural connectivity using psychophysiological interaction analysis with the auditory cortex as the seed region. AP musicians showed decreased functional connectivity with the sensorimotor and middle frontal gyrus compared to non-AP musicians. Crucially, AP differentially affected connectivity with parietal and frontal brain regions depending on the HIN domain being music or speech. These findings suggest that AP plays a critical role in HIN perception, manifested by increased activation and functional independence between auditory and sensorimotor regions for perceiving music and speech streams.


Subjects
Auditory Cortex, Music, Speech Perception, Humans, Brain/physiology, Auditory Perception/physiology, Hearing, Auditory Cortex/physiology, Brain Mapping, Speech Perception/physiology, Pitch Perception/physiology, Acoustic Stimulation
15.
J Neurosci ; 44(11)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38331581

ABSTRACT

Microsaccades are small, involuntary eye movements that occur during fixation. Their role is debated with recent hypotheses proposing a contribution to automatic scene sampling. Microsaccadic inhibition (MSI) refers to the abrupt suppression of microsaccades, typically evoked within 0.1 s after new stimulus onset. The functional significance and neural underpinnings of MSI are subjects of ongoing research. It has been suggested that MSI is a component of the brain's attentional re-orienting network which facilitates the allocation of attention to new environmental occurrences by reducing disruptions or shifts in gaze that could interfere with processing. The extent to which MSI is reflexive or influenced by top-down mechanisms remains debated. We developed a task that examines the impact of auditory top-down attention on MSI, allowing us to disentangle ocular dynamics from visual sensory processing. Participants (N = 24 and 27; both sexes) listened to two simultaneous streams of tones and were instructed to attend to one stream while detecting specific task "targets." We quantified MSI in response to occasional task-irrelevant events presented in both the attended and unattended streams (frequency steps in Experiment 1, omissions in Experiment 2). The results show that initial stages of MSI are not affected by auditory attention. However, later stages (∼0.25 s postevent onset), affecting the extent and duration of the inhibition, are enhanced for sounds in the attended stream compared to the unattended stream. These findings provide converging evidence for the reflexive nature of early MSI stages and robustly demonstrate the involvement of auditory attention in modulating the later stages.


Subjects
Eye Movements, Visual Perception, Male, Female, Humans, Visual Perception/physiology, Sensation, Sound, Auditory Perception/physiology
16.
Hear Res ; 444: 108965, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38364511

ABSTRACT

Age-related auditory dysfunction, presbycusis, is caused in part by functional changes in the auditory cortex (ACtx) such as altered response dynamics and increased population correlations. Given the ability of cortical function to be altered by training, we tested if performing auditory tasks might benefit auditory function in old age. We examined this by training adult mice on a low-effort tone-detection task for at least six months and then investigated functional responses in ACtx at an older age (∼18 months). Task performance remained stable well into old age. Comparing sound-evoked responses of thousands of ACtx neurons using in vivo 2-photon Ca2+ imaging, we found that many aspects of youthful neuronal activity, including low activity correlations, lower neural excitability, and a greater proportion of suppressed responses, were preserved in trained old animals as compared to passively-exposed old animals. Thus, consistent training on a low-effort task can benefit age-related functional changes in ACtx and may preserve many aspects of auditory function.


Subjects
Auditory Cortex, Presbycusis, Mice, Animals, Auditory Cortex/physiology, Aging/physiology, Hearing, Sound, Acoustic Stimulation, Auditory Perception/physiology
17.
Neurorehabil Neural Repair ; 38(4): 257-267, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38339993

ABSTRACT

OBJECTIVE: Increasing perceptual load alters behavioral outcomes in post-stroke fatigue (PSF). While the effect of perceptual load on top-down attentional processing is known, here we investigate whether increasing perceptual load modulates bottom-up attentional processing in a fatigue-dependent manner. METHODS: In this cross-sectional observational study of 29 first-time stroke survivors with no clinical depression, an auditory oddball task consisting of target, standard, and novel tones was performed under conditions of low and high perceptual load. Electroencephalography was used to measure auditory evoked potentials. Perceived effort was rated using the visual analog scale at regular intervals during the experiment. Fatigue was measured using the fatigue severity scale. The effects of fatigue and perceptual load on behavior (response time, accuracy, and effort rating) and auditory evoked potentials (amplitude and latency) were examined using mixed-model analyses of variance (ANOVAs). RESULTS: Response time was prolonged with greater perceptual load and fatigue. There was no effect of load or fatigue on accuracy. Greater effort was reported with higher perceptual load in both high and low fatigue. The P300a amplitude of auditory evoked potentials (AEPs) for novel stimuli was attenuated in high fatigue with increasing load when compared to low fatigue. The latency of the P300a was longer in low fatigue with increasing load when compared to high fatigue. There were no effects on P300b components, while the N100 was smaller in high-load conditions. INTERPRETATION: The high-fatigue-specific modulation of the P300a component of the AEP with increasing load is indicative of a distractor-driven alteration in the orienting response, suggestive of compromised bottom-up selective attention in PSF.


Subjects
Attention, Auditory Evoked Potentials, Humans, Cross-Sectional Studies, Attention/physiology, Electroencephalography, Reaction Time/physiology, Fatigue, Evoked Potentials/physiology, Auditory Perception/physiology
18.
Eur J Neurosci ; 59(8): 2059-2074, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38303522

ABSTRACT

Linear models are becoming increasingly popular to investigate brain activity in response to continuous and naturalistic stimuli. In the context of auditory perception, these predictive models can be 'encoding', when stimulus features are used to reconstruct brain activity, or 'decoding' when neural features are used to reconstruct the audio stimuli. These linear models are a central component of some brain-computer interfaces that can be integrated into hearing assistive devices (e.g., hearing aids). Such advanced neurotechnologies have been widely investigated when listening to speech stimuli but rarely when listening to music. Recent attempts at neural tracking of music show that the reconstruction performances are reduced compared with speech decoding. The present study investigates the performance of stimuli reconstruction and electroencephalogram prediction (decoding and encoding models) based on the cortical entrainment of temporal variations of the audio stimuli for both music and speech listening. Three hypotheses that may explain differences between speech and music stimuli reconstruction were tested to assess the importance of the speech-specific acoustic and linguistic factors. While the results obtained with encoding models suggest different underlying cortical processing between speech and music listening, no differences were found in terms of reconstruction of the stimuli or the cortical data. The results suggest that envelope-based linear modelling can be used to study both speech and music listening, despite the differences in the underlying cortical mechanisms.
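A backward "decoding" model of the kind described, ridge regression reconstructing the stimulus envelope from time-lagged EEG, can be sketched as follows. The lag count, regularisation strength, and synthetic data below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def lagged(eeg, max_lag):
    """Stack copies of the EEG shifted 0..max_lag-1 samples into the
    future: the stimulus envelope precedes the response it evokes."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * max_lag))
    for lag in range(max_lag):
        X[:n - lag, lag * ch:(lag + 1) * ch] = eeg[lag:]
    return X

def fit_decoder(eeg, envelope, max_lag=16, alpha=100.0):
    """Ridge-regression backward model mapping lagged EEG to the envelope."""
    X = lagged(eeg, max_lag)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
    return w

# Synthetic check: one channel carries the envelope delayed by 5 samples,
# the other is pure noise; the decoder should recover the delayed copy.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.stack([np.roll(env, 5), rng.standard_normal(2000)], axis=1)
w = fit_decoder(eeg, env)
recon = lagged(eeg, 16) @ w
r = np.corrcoef(recon, env)[0, 1]  # reconstruction accuracy (Pearson r)
```

Reconstruction accuracy is typically reported as the Pearson correlation between the reconstructed and true envelopes, which is the metric on which music decoding has been found to lag behind speech decoding.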


Subjects
Music, Speech Perception, Auditory Perception/physiology, Speech, Speech Perception/physiology, Electroencephalography, Acoustic Stimulation
19.
J Neurosci ; 44(15)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38388426

ABSTRACT

Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale are hardly affected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.


Subjects
Auditory Cortex, Speech Perception, Humans, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Speech Perception/physiology, Temporal Lobe, Magnetic Resonance Imaging, Attention/physiology, Auditory Perception/physiology, Acoustic Stimulation
20.
Nat Neurosci ; 27(4): 758-771, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38307971

ABSTRACT

Primary sensory cortices respond to crossmodal stimuli-for example, auditory responses are found in primary visual cortex (V1). However, it remains unclear whether these responses reflect sensory inputs or behavioral modulation through sound-evoked body movement. We address this controversy by showing that sound-evoked activity in V1 of awake mice can be dissociated into auditory and behavioral components with distinct spatiotemporal profiles. The auditory component began at approximately 27 ms, was found in superficial and deep layers and originated from auditory cortex. Sound-evoked orofacial movements correlated with V1 neural activity starting at approximately 80-100 ms and explained auditory frequency tuning. Visual, auditory and motor activity were expressed by different laminar profiles and largely segregated subsets of neuronal populations. During simultaneous audiovisual stimulation, visual representations remained dissociable from auditory-related and motor-related activity. This three-fold dissociability of auditory, motor and visual processing is central to understanding how distinct inputs to visual cortex interact to support vision.


Subjects
Auditory Cortex, Primary Visual Cortex, Animals, Mice, Acoustic Stimulation, Photic Stimulation, Visual Perception/physiology, Auditory Cortex/physiology, Auditory Perception/physiology